Alpaca: A Strong, Replicable Instruction-Following Model
We introduce Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations.
Alpaca is reported to perform comparably to GPT-3.5 (text-davinci-003).
We train the Alpaca model on 52K instruction-following demonstrations generated in the style of self-instruct using text-davinci-003.
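As a rough illustration of the self-instruct-style data generation step, the sketch below prompts text-davinci-003 with a few seed demonstrations and asks it to propose a new instruction-following task. The prompt wording, the seed_tasks.json file, and the decoding parameters are assumptions for illustration, not the exact pipeline used for Alpaca.

```python
# Sketch of self-instruct-style data generation with text-davinci-003.
# The prompt template, file name, and parameters are illustrative only;
# the real pipeline uses a more elaborate prompt plus filtering/deduplication.
import json
import random

from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

client = OpenAI()

with open("seed_tasks.json") as f:  # hypothetical file of human-written seed tasks
    seed_tasks = json.load(f)


def generate_new_task(num_demos: int = 3) -> str:
    """Show a few seed demonstrations, then ask the model for a new task."""
    demos = random.sample(seed_tasks, num_demos)
    prompt = "Come up with a new task following the format of the examples.\n\n"
    for i, task in enumerate(demos, 1):
        prompt += f"{i}. Instruction: {task['instruction']}\n   Output: {task['output']}\n"
    prompt += f"{num_demos + 1}. Instruction:"

    resp = client.completions.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=512,
        temperature=1.0,
    )
    return resp.choices[0].text


print(generate_new_task())
```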
Figure: overview of the Alpaca training pipeline (https://crfm.stanford.edu/static/img/posts/2023-03-13-alpaca/alpaca_main.jpg)
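The fine-tuning itself is standard supervised training on the demonstration data. Below is a minimal sketch, assuming the 52K demonstrations are stored in a JSON file of instruction/output records and using the Hugging Face Trainer; the checkpoint path, prompt template, and hyperparameters are illustrative rather than the exact Alpaca recipe.

```python
# Minimal sketch of supervised fine-tuning on instruction-following data.
# Assumption: "alpaca_data.json" holds records with "instruction" and "output"
# fields; model path and hyperparameters are placeholders, not the real recipe.
import json

import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

PROMPT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)


class InstructionDataset(Dataset):
    """Tokenizes (prompt + response) pairs for causal-LM fine-tuning."""

    def __init__(self, path, tokenizer, max_len=512):
        with open(path) as f:
            self.records = json.load(f)
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        rec = self.records[idx]
        text = PROMPT.format(instruction=rec["instruction"]) + rec["output"]
        enc = self.tokenizer(
            text + self.tokenizer.eos_token,
            truncation=True,
            max_length=self.max_len,
            padding="max_length",
            return_tensors="pt",
        )
        input_ids = enc["input_ids"].squeeze(0)
        attention_mask = enc["attention_mask"].squeeze(0)
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100  # ignore padding positions in the loss
        return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}


def main():
    model_path = "path/to/llama-7b"  # placeholder for a local LLaMA 7B checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
    model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16)

    dataset = InstructionDataset("alpaca_data.json", tokenizer)
    args = TrainingArguments(
        output_dir="alpaca-7b-sft",
        per_device_train_batch_size=4,
        gradient_accumulation_steps=8,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
        logging_steps=10,
        save_strategy="epoch",
    )
    Trainer(model=model, args=args, train_dataset=dataset).train()


if __name__ == "__main__":
    main()
```

In practice the full recipe also handles demonstrations that carry an additional "input" field and distributes training across multiple GPUs; the sketch above keeps only the core instruction-to-response objective.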